Using NVIDIA Jetson Modules

Support for NVIDIA Jetson

NVIDIA Jetson modules and developer kits do not have native support for DPDK because of the NICs they ship with. In the case of the Jetson Xavier modules, the Ethernet interface is based on the Marvell 88E1512PB2, whereas the Jetson Orin modules are based on a Realtek controller. In both cases, there is no support for kernel bypass. More information can be found on the DPDK Compatible Hardware site.

For NVIDIA Jetson modules, it is recommended to install an external PCIe or M.2-based network card that is compatible with DPDK. An NVIDIA ConnectX card can be used to boost the Jetson's networking, providing both DPDK and RDMA support.
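
To see which network controllers are present on the system and which kernel driver each one is currently using (for example, after installing an external PCIe or M.2 card), you can list the PCI network devices; the exact output depends on your hardware:

# List PCI Ethernet controllers together with the kernel driver currently in use
sudo lspci -nn -k | grep -A 3 -i ethernet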

RidgeRun Use Case

At RidgeRun, we are currently testing the Intel I210-T1 at 1 Gbps for DPDK on both x86 and NVIDIA Jetson systems.


Stay tuned for updates!

PCIe network card (Intel I210-T1) on Jetson AGX Orin

NIC Usage

1. Connect the NIC: Attach the NIC to the Jetson AGX Orin using the external PCIe port and power on the device.

2. Check Ethernet Interfaces: After booting up, run the following command to check the available network interfaces:

nvidia@ubuntu:~$ ip addr

And you should get a result similar to this:

...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 98:b7:85:1f:ca:d7 brd ff:ff:ff:ff:ff:ff
    altname enP5p1s0
    inet 192.168.100.112/24 brd 192.168.100.255 scope global dynamic noprefixroute eth0
       valid_lft 86265sec preferred_lft 86265sec
    inet6 fe80::a8ac:52de:acc:c9b9/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
...
6: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1466 qdisc mq state DOWN group default qlen 1000
    link/ether 48:b0:2d:78:ba:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global eth1
       valid_lft forever preferred_lft forever
...

Here, the eth0 and eth1 interfaces should appear.
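
To confirm which of the two interfaces corresponds to the Intel I210 (handled by the igb kernel driver), you can optionally query the driver bound to each interface:

# Show the kernel driver and PCI bus info for each interface
ethtool -i eth0
ethtool -i eth1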


Info
Note that eth1 may be assigned the IP address 192.168.100.1, which is often the default gateway address, and may cause connectivity issues.


2.1 Troubleshoot Connectivity Issues: If there’s no connectivity (no ping response or SSH access to the board):

  • Open Settings > Network.
  • Check for a duplicate entry of the eth1 interface. This duplicate entry might be manually assigning the IP address 192.168.100.1, leading to a conflict.
  • Remove the Duplicate Interface:
    • In Settings > Network, locate eth1.
    • Click the gear icon next to eth1 to access configuration options.
    • Select Remove Connection Profile to delete the duplicate entry.
Duplicated eth1 interface in Network options


Duplicated eth1 options menu
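
If the board is running headless, the duplicate profile can also be removed from the command line with NetworkManager's nmcli tool. This is a sketch; the connection profile name shown here is only a placeholder and will differ on your system:

# List all NetworkManager connection profiles and the devices they are bound to
nmcli connection show
# Delete the duplicated eth1 profile (replace the name with the one listed above)
sudo nmcli connection delete "Wired connection 2"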


3. Verify Connectivity: Once the duplicate profile is removed, the conflict should be resolved, restoring normal connectivity to the board.

DPDK Installation

Follow the instructions in Setup Data Plane Kit/From source to install DPDK on your system.
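
For reference, building DPDK from source follows the usual meson/ninja flow shown below; the download URL and version are only examples, so check the linked page for the exact release and options used by RidgeRun:

# Download and extract a DPDK LTS release (version shown as an example)
wget https://fast.dpdk.org/rel/dpdk-23.11.2.tar.xz
tar -xf dpdk-23.11.2.tar.xz
cd dpdk-stable-23.11.2

# Configure, build, and install, including the example applications
meson setup build -Dexamples=all
ninja -C build
sudo ninja -C build install
sudo ldconfig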

Kernel and Device Tree Changes Required

To use DPDK on a Jetson AGX Orin with the Intel I210-T1 NIC, some changes are required at the kernel and device tree (dtb) level. To include these changes on the target board, follow the next steps.


Compiled Kernel image, device tree, and modules
You can download from here the kernel image, device tree, and modules (containing the uio_pci_generic module and the dtb changes) compiled by following the steps below.


Environment setup for building the Kernel, dtb and modules

1. Download Linux for Tegra

Follow the instructions from this wiki to install Linux for Tegra using SDK Manager. In this case, JetPack 6.0 will be used.

2. Download the NVIDIA's Driver Package (BSP) sources

Download the source files from this link.

The file you need to download is under Downloads and Links -> SOURCES -> Driver Package (BSP) Sources. It will download a file named public_sources.tbz2.

Use the following commands to extract the files under the Linux for Tegra directory:

# Go into the target HW image folder created in step 1.
cd <target HW image folder>/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
cp ~/Downloads/public_sources.tbz2 .

tar -xjf public_sources.tbz2 Linux_for_Tegra/source/kernel_src.tbz2 --strip-components 2
tar -xjf public_sources.tbz2 Linux_for_Tegra/source/kernel_oot_modules_src.tbz2 --strip-components 2
tar -xjf public_sources.tbz2 Linux_for_Tegra/source/nvidia_kernel_display_driver_source.tbz2 --strip-components 2

mkdir sources
tar -xjf kernel_src.tbz2 -C sources
tar -xjf kernel_oot_modules_src.tbz2 -C sources
tar -xjf nvidia_kernel_display_driver_source.tbz2 -C sources

After the steps above are done, your directory should look like the following:

~/nvidia/nvidia_sdk/JetPack_6.0_DP_Linux_DP_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/sources$ tree -L 1
├── generic_rt_build.sh
├── hardware
├── hwpm
├── kernel
├── kernel_oot_modules_src.tbz2
├── kernel_src_build_env.sh
├── kernel_src.tbz2
├── Makefile
├── nvbuild.sh
├── nvcommon_build.sh
├── nvdisplay
├── nvethernetrm
├── nvgpu
├── nvidia_kernel_display_driver_source.tbz2
├── nvidia-oot
├── out
└── public_sources.tbz2

3. Set the Development Environment

3.1. Install dependencies on the host PC. Make sure the following packages are installed on your system:

  • wget
  • lbzip2
  • build-essential
  • bc
  • zip
  • libgmp-dev
  • libmpfr-dev
  • libmpc-dev
  • vim-common # For xxd

On Debian-based systems, you can run the following:

sudo apt install wget lbzip2 build-essential bc zip libgmp-dev libmpfr-dev libmpc-dev vim-common

3.2. Get the Toolchain:

If you haven't already, download the toolchain. The toolchain is the set of tools required to cross-compile the Linux kernel. You can download the Bootlin Toolchain gcc 11.3 from the linux-archive.

The file is under Downloads and Links -> TOOLS -> Bootlin Toolchain gcc 11.3. It will download a file named aarch64--glibc--stable-2022.08-1.tar.bz2.

Extract the files with these commands:

cd $HOME
mkdir -p $HOME/l4t-gcc
cd $HOME/l4t-gcc

cp ~/Downloads/aarch64--glibc--stable-2022.08-1.tar.bz2 .
tar -xjf aarch64--glibc--stable-2022.08-1.tar.bz2

3.3. Export the Environment variables:

Open a terminal and run the following commands to export the environment variables that will be used in the next steps. Keep in mind that if the sources are in a different location, DEVDIR must be adjusted so that it always points to the Linux_for_Tegra folder.

export DEVDIR=~/nvidia/nvidia_sdk/JetPack_6.0_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra
export CROSS_COMPILE=$HOME/l4t-gcc/aarch64--glibc--stable-2022.08-1/bin/aarch64-buildroot-linux-gnu-
export INSTALL_MOD_PATH=$DEVDIR/rootfs/
export KERNEL_HEADERS=$DEVDIR/sources/kernel/kernel-jammy-src
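
To confirm that the variables point to valid locations, you can optionally run a quick sanity check (the paths depend on where you extracted the toolchain and the sources):

# Verify the cross-compiler is reachable and the kernel sources are in place
${CROSS_COMPILE}gcc --version
ls $DEVDIR/sources/kernel/kernel-jammy-src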

Adding the uio_pci_generic Driver

Include the uio_pci_generic driver in the kernel configuration: in the file $DEVDIR/sources/kernel/kernel-jammy-src/arch/arm64/configs/defconfig, add the following line:

CONFIG_UIO_PCI_GENERIC=m
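
As a convenience, you can append this option to the defconfig from the command line, which is equivalent to editing the file manually:

echo "CONFIG_UIO_PCI_GENERIC=m" >> $DEVDIR/sources/kernel/kernel-jammy-src/arch/arm64/configs/defconfig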

Disable IOMMU on PCIe C5 RP

Remove the iommu-map, iommu-map-mask, and dma-coherent properties from the pcie@141a0000 node in the device tree located at $DEVDIR/sources/hardware/nvidia/t23x/nv-public/tegra234.dtsi. After the change, the node should look like the following:

pcie@141a0000 {
	compatible = "nvidia,tegra234-pcie";
	...
	interconnect-names = "dma-mem", "write";
	//iommu-map = <0x0 &smmu_niso0 TEGRA234_SID_PCIE5 0x1000>;
	//iommu-map-mask = <0x0>;
	//dma-coherent;

	status = "disabled";
};

Remove the iommus property from the pcie@141a0000 node in the device tree located at $DEVDIR/sources/hardware/nvidia/t23x/nv-public/nv-soc/tegra234-base-overlay.dtsi. The node should look like this:

pcie@141a0000 {
	//iommus = <&smmu_niso0 TEGRA234_SID_PCIE5>;
};

Allow SMMU bypass by default in the kernel configuration: in the file $DEVDIR/sources/kernel/kernel-jammy-src/arch/arm64/configs/defconfig, add the following line:

CONFIG_ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT=n

Compile the Kernel, dtb and Modules

Go to the kernel directory:

cd $DEVDIR/sources/kernel/kernel-jammy-src

Apply the default configuration. You can also enable any additional kernel options if you wish:

make menuconfig

Go to the sources directory:

cd $DEVDIR/sources

Compile the kernel:

make -C kernel

Install the kernel:

sudo -E make install -C kernel

Compile the Out-of-Tree modules:

make modules

Install the modules:

sudo -E make modules_install

Compile the dtb:

make dtbs

Install the dtb:

cp nvidia-oot/device-tree/platform/generic-dts/dtbs/* $DEVDIR/kernel/dtb/

Install the new kernel image, dtb and modules

1. Copy the kernel image and the modules to the board

You can use the scp and rsync commands for this:


Info

We are using the tegra234-p3737-0000+p3701-0005-nv.dtb dtb in this case. If you are using a different dtb file, you can verify that the changes are included using this command:

dtc -I dtb -O dts -o <path to save dts>/devicetree.dts <dtb file you will use>
Then check that the pcie@141a0000 node in the devicetree.dts file does not contain the properties dma-coherent, iommu-map-mask, iommu-map, or iommus.


# This command is used instead of scp so the symbolic links are copied and not the files they point to
rsync -Wac --progress ../rootfs/lib/modules/5.15.136-tegra/* <board username>@<board IP>:/tmp/5.15.136-tegra/

scp $DEVDIR/rootfs/boot/Image <board username>@<board IP>:/tmp

scp ../kernel/dtb/tegra234-p3737-0000+p3701-0005-nv.dtb <board username>@<board IP>:/tmp

2. Apply changes on the board

Use these commands on the board:

cd /lib/modules
# Create a backup of the modules
sudo mv 5.15.136-tegra 5.15.136-tegra-bu
sudo mv /tmp/5.15.136-tegra/ ./5.15.136-tegra-uio-supported
sudo rm -rf 5.15.136-tegra-uio-supported/source
sudo rm -rf 5.15.136-tegra-uio-supported/build
sudo ln -s /usr/src/linux-headers-5.15.136-tegra-ubuntu22.04_aarch64/3rdparty/canonical/linux-jammy/kernel-source 5.15.136-tegra-uio-supported/build
sudo ln -s 5.15.136-tegra-uio-supported 5.15.136-tegra

cd /boot
# Create a backup of the kernel image
sudo mv Image Image-bu
sudo mv /tmp/Image Image-uio-supported
sudo ln -s Image-uio-supported Image

sudo mv tegra234-p3737-0000+p3701-0005-nv.dtb tegra234-p3737-0000+p3701-0005-nv.dtb-bu
sudo mv /tmp/tegra234-p3737-0000+p3701-0005-nv.dtb tegra234-p3737-0000+p3701-0005-nv.dtb-iommu-disabled
sudo ln -s tegra234-p3737-0000+p3701-0005-nv.dtb-iommu-disabled tegra234-p3737-0000+p3701-0005-nv.dtb

In the file /boot/extlinux/extlinux.conf, change the FDT entry, or add it if it is missing:

LABEL primary
      ...
      FDT /boot/tegra234-p3737-0000+p3701-0005-nv.dtb
      ...

3. Reboot the board

4. Load the driver module

Use this command (on the board) to load the uio_pci_generic driver:

sudo modprobe uio_pci_generic
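
To verify that the module loaded and, optionally, to have it loaded automatically on every boot (an optional step, not part of the original procedure), you can run:

# Confirm the module is loaded
lsmod | grep uio_pci_generic
# Optional: load uio_pci_generic automatically at boot via systemd's modules-load.d
echo "uio_pci_generic" | sudo tee /etc/modules-load.d/uio_pci_generic.conf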

Running a Sample Application

1. Load the Driver and Bind the NIC


Info
Remember to follow the steps from Using NVIDIA Jetson Modules/DPDK Installation to install DPDK.


1.1. Run the following command to check the NIC information:

dpdk-devbind.py -s

It will give a similar output to this:

Network devices using kernel driver
===================================
0001:01:00.0 'RTL8822CE 802.11ac PCIe Wireless Network Adapter c822' if=wlan0 drv=rtl88x2ce unused=rtl8822ce,vfio-pci,uio_pci_generic 
0005:01:00.0 'I210 Gigabit Network Connection 1533' if=eth0 drv=igb unused=vfio-pci,uio_pci_generic

No 'Baseband' devices detected
==============================

No 'Crypto' devices detected
============================

No 'DMA' devices detected
=========================

No 'Eventdev' devices detected
==============================

No 'Mempool' devices detected
=============================

No 'Compress' devices detected
==============================

No 'Misc (rawdev)' devices detected
===================================

No 'Regex' devices detected
===========================

No 'ML' devices detected
========================

In this case, the target NIC is the Ethernet card, which has the interface name eth0 and the PCI address 0005:01:00.0.

1.2. Run the following commands to prepare your NIC for DPDK:

sudo ifconfig eth0 down
sudo dpdk-devbind.py --bind=uio_pci_generic 0005:01:00.0

1.3. Run the following command to allocate hugepages for DPDK to use:

sudo dpdk-hugepages.py --pagesize 2M --setup 256M --node 0
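
You can check the resulting hugepage reservation with the same helper script (the output will vary by system):

# Show the current hugepage reservations and mount points
dpdk-hugepages.py -s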

2. Verify with an Example

Use the ethtool example application to verify everything is working correctly:

cd <Installation path>/dpdk-stable-23.11.2/<build dir>/examples
sudo ./dpdk-ethtool

Expected output should include the NIC being successfully detected and initialized. The sample app is a command-line tool; use the drvinfo command to see the driver information, similar to:

EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_e1000_igb (8086:1533) device: 0005:01:00.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Number of NICs: 1
Init port 0..
EthApp> drvinfo
Port 0 driver: net_e1000_igb (ver: DPDK 23.11.2)
firmware-version: 3.16, 0x800004ff, 1.304.0
bus-info: 0005:01:00.0
EthApp> Closing port 0... Done
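
Beyond dpdk-ethtool, you can optionally run dpdk-testpmd for a quick forwarding check. This is a sketch; the core list, memory channels, and PCI address may need to be adjusted for your setup:

# Start testpmd in interactive mode using the NIC bound to uio_pci_generic
sudo dpdk-testpmd -l 0-1 -n 4 -a 0005:01:00.0 -- -i

# Inside the testpmd prompt:
#   show port info 0     -> display link status and driver details
#   start                -> begin packet forwarding
#   stop                 -> stop forwarding and print statistics
#   quit                 -> exit the application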